Improving parallel program performance using critical path analysis
A programming tool that performs critical path analysis for parallel programs has been developed. This tool determines the critical path for the program as scheduled onto a parallel computer with P processing elements, the critical path for the program expressed as a data flow graph (when maximal parallelism can be expressed), and the minimum number of processing elements (P_opt) needed to obtain maximum program speedup. Experiments were performed using several versions of a Gaussian elimination program to examine how speedup varied with changes in granularity and critical path length. These experiments showed that when the available number of processing elements P < P_opt, increasing granularity improved program speedup more than reducing the data flow graph's critical path length, whereas when P ≥ P_opt, increasing granularity degraded program speedup while reducing critical path length improved it.
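The quantities the tool reports can be sketched for a toy task graph (the graph, task costs, and function names below are illustrative, not the tool's API): total work, the data-flow critical path (span), and the ratio work/span, which is a lower bound on the parallelism and hence on P_opt.

```python
from functools import lru_cache

# Hypothetical task graph: node -> (cost, successors).
tasks = {
    "a": (2, ["b", "c"]),
    "b": (3, ["d"]),
    "c": (1, ["d"]),
    "d": (2, []),
}

def critical_path(graph):
    # Longest-cost path through the DAG, memoised per node.
    @lru_cache(maxsize=None)
    def longest_from(node):
        cost, succs = graph[node]
        return cost + max((longest_from(s) for s in succs), default=0)
    return max(longest_from(n) for n in graph)

work = sum(cost for cost, _ in tasks.values())   # 8 time units in total
span = critical_path(tasks)                      # a -> b -> d = 7 time units
print(work, span, work / span)                   # parallelism bound work/span
```

With this graph the parallelism bound work/span is barely above 1, so adding processors beyond that point cannot help, which mirrors the abstract's observation that behaviour changes once P reaches P_opt.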
Programming environments for parallel programming
Programming environments are used to bridge the gap between actual computers and the development of their application programs. Most parallel programming environments currently in use focus on a specific parallel programming tool. This paper examines the programming environments, languages, tools, and techniques used for programming parallel computers. Several topics are examined. First, a brief survey of parallel computer architectures and typical application programs is performed. Then, a survey of available environments, languages, and tools is conducted to determine how parallel programming is currently performed. Finally, by considering architectures, applications, and environments, an attempt is made to identify desirable characteristics for a parallel programming environment and a useful set of parallel programming tools.
Epidemiology and management of epilepsy in Hong Kong: an overview
Over half of the estimated 50 million people with epilepsy live in Asia, but there has been limited information on the epidemiology, aetiology and management of epilepsy from this region. In this article, we summarise some of the main problems faced by patients and the current treatment options available in an urban area of China.
Confluence: A Robust Non-IoU Alternative to Non-Maxima Suppression in Object Detection
Confluence is a novel non-Intersection over Union (IoU) alternative to Non-Maxima Suppression (NMS) for bounding box post-processing in object detection. It overcomes the inherent limitations of IoU-based NMS variants, providing a more stable, consistent predictor of bounding box clustering by using a proximity metric inspired by the normalized Manhattan distance. Unlike Greedy and Soft NMS, it does not rely solely on classification confidence scores to select optimal bounding boxes; instead, it selects the box which is closest to every other box within a given cluster and removes highly confluent neighbouring boxes. Confluence is experimentally validated on the MS COCO and CrowdHuman benchmarks, improving Average Precision by up to 2.3-3.8% and Average Recall by up to 5.3-7.2% compared against de facto standard and state-of-the-art NMS variants. The quantitative results are supported by extensive qualitative analysis, and threshold sensitivity experiments support the conclusion that Confluence is more robust than NMS variants. Confluence represents a paradigm shift in bounding box processing, with the potential to replace IoU in bounding box regression processes.
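A minimal sketch of the idea, under a simplified reading of the abstract (the per-corner normalized Manhattan distance and the greedy cluster selection below are illustrative; the published algorithm also weights proximity by classification confidence, which is omitted here):

```python
def proximity(a, b):
    # Normalized Manhattan distance between two boxes (x1, y1, x2, y2):
    # coordinates are min-max scaled over the pair's joint extent so the
    # measure is scale-invariant, then corner differences are summed.
    xs = [a[0], a[2], b[0], b[2]]
    ys = [a[1], a[3], b[1], b[3]]
    sx = max(xs) - min(xs) or 1.0
    sy = max(ys) - min(ys) or 1.0
    nx = [(x - min(xs)) / sx for x in xs]
    ny = [(y - min(ys)) / sy for y in ys]
    return (abs(nx[0] - nx[2]) + abs(ny[0] - ny[2])
            + abs(nx[1] - nx[3]) + abs(ny[1] - ny[3]))

def confluence_nms(boxes, thresh=0.5):
    # Simplified Confluence-style selection: keep the box whose summed
    # proximity to the remaining boxes is smallest (i.e. the box closest
    # to every other box), then drop its highly confluent neighbours
    # (proximity < thresh); repeat on the remainder.
    keep, idxs = [], list(range(len(boxes)))
    while idxs:
        best = min(idxs, key=lambda i: sum(
            proximity(boxes[i], boxes[j]) for j in idxs if j != i))
        keep.append(best)
        idxs = [i for i in idxs
                if i != best and proximity(boxes[i], boxes[best]) >= thresh]
    return keep

# Two heavily overlapping boxes plus one distant box: the overlapping
# pair collapses to a single detection, the distant box survives.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(confluence_nms(boxes))
```

Note that, unlike Greedy NMS, the kept box is chosen by proximity to its cluster rather than by the highest confidence score, which is the behavioural difference the abstract emphasises.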
Pre-corneal tear film thickness in humans measured with a novel technique.
Purpose: The purpose of this work was to gather preliminary data in normals and dry eye subjects, using a new, non-invasive imaging platform to measure the thickness of the pre-corneal tear film.
Methods: Human subjects were screened for dry eye and classified as dry or normal. Tear film thickness over the inferior paracentral cornea was measured using laser illumination and a complementary metal-oxide-semiconductor (CMOS) camera. A previously developed mathematical model was used to calculate the thickness of the tear film by applying the principle of the spatial auto-correlation function (ACF).
Results: Mean tear film thickness values (±SD) were 3.05 μm (0.20) and 2.48 μm (0.32) on the initial visit for normals (n=18) and dry eye subjects (n=22), respectively, and were significantly different (p<0.001, 2-sample t-test). Repeatability was good between visits 1 and 2 for normals (intraclass correlation coefficient [ICC]=0.935) and dry eye subjects (ICC=0.950). Tear film thickness increased above baseline for the dry eye subjects following viscous drop instillation and remained significantly elevated for up to approximately 32 min (n=20; p<0.05 until 32 min; general linear mixed model and Dunnett's tests).
Conclusions: This technique for imaging the ocular surface appears to provide tear thickness values in agreement with other non-invasive methods. Moreover, the technique can differentiate between normal and dry eye patient types.
Chemical aspects related to using recycled geopolymers as aggregates
Despite extensive research into the sustainability of geopolymers, end-of-life aspects have been largely overlooked. A recycling scenario is examined in this study. This requires an investigation of the alkali leaching potential from a geopolymeric matrix. To study the feasibility of geopolymer cement (GPC) recycling, the migration of alkalis was evaluated for the first time on a microstructural level through energy dispersive X-ray (EDX) scanning electron microscopy (SEM) elemental mapping and leaching tests. Macroscale impacts were assessed through an investigation of Portland cement (PC) mortar properties affected by alkali concentration. Leaching tests indicated that alkalis immediately become available in aqueous environments, but the majority remain chemically or physically bound in the matrix. This type of leaching accelerates the initial setting of PC paste. Elemental mapping and EDX/SEM analysis showed a complex paste-aggregate interfacial transition zone. Exchange of calcium and sodium, revealed by the maps, resulted in the migration of sodium into the PC paste and the formation of additional calcium-silicon-based phases in the geopolymeric matrix. Strength values of mortars with 25% and 50% recycled aggregates (RA) showed negligible differences compared with the reference sample. Screening tests indicated a low potential for GPC RA inducing alkali-silica reaction. Transport of GPC RA alkalis and the underlying mechanisms were observed. This transport phenomenon was found to have minor effects on the properties of the PC mortar, indicating that recycling of geopolymers is a viable reuse practice.
InPCM: a network caching technique for improving the performance of TCP in wireless ad-hoc networks
We propose a novel mechanism called the In-Network Packet Caching Mechanism (InPCM) to address TCP's poor performance in IEEE 802.11-based multi-hop wireless networks. In particular, we address TCP's inappropriate response to bursty and location-dependent errors. The key concept is the use of intermediate nodes to perform packet recovery on behalf of TCP senders, similar to the well-known Snoop TCP but adapted to work over multi-hop wireless networks. We have conducted ns-2 simulation studies over a variety of network conditions and topologies. Our results confirm InPCM's benefits to TCP in terms of delay and throughput. Moreover, it is immediately deployable without modifications to current protocols.
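The intermediate-node recovery idea can be sketched as follows (the class, its API, and the 3-duplicate-ACK trigger are hypothetical simplifications for illustration, not InPCM's actual protocol logic): a forwarding node caches the data packets it relays and retransmits locally when duplicate ACKs indicate a loss further downstream, instead of letting the loss propagate back to the TCP sender.

```python
class CachingNode:
    # Hypothetical sketch of an in-network caching forwarder.
    def __init__(self, cache_size=64):
        self.cache = {}            # seq -> payload, for local recovery
        self.cache_size = cache_size
        self.last_ack = -1         # highest cumulative ACK seen
        self.dup_acks = 0

    def forward_data(self, seq, payload):
        # Cache a copy of every relayed packet, evicting the oldest
        # sequence number when the cache is full.
        if len(self.cache) >= self.cache_size:
            self.cache.pop(min(self.cache))
        self.cache[seq] = payload
        return (seq, payload)      # packet continues downstream

    def on_ack(self, ack):
        # Purge acknowledged packets on a new cumulative ACK; on the
        # third duplicate ACK, retransmit the missing packet locally.
        if ack > self.last_ack:
            self.cache = {s: p for s, p in self.cache.items() if s > ack}
            self.last_ack, self.dup_acks = ack, 0
            return None
        self.dup_acks += 1
        if self.dup_acks >= 3 and ack + 1 in self.cache:
            return (ack + 1, self.cache[ack + 1])   # local recovery
        return None
```

The design point this illustrates is that the retransmission is served from a node near the loss, so recovery latency scales with the short wireless segment rather than the full multi-hop path.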
Current Source DC-DC Converter for Undersea Fiber Optic Sensors
Today, there are nearly 400 cable networks spanning roughly 750,000 miles to connect our world together. With thousands of miles of cable already lying on the ocean floor, and thousands more to come, many have recognized a unique opportunity in using this cable infrastructure as an attachment point for sensors to help study and monitor the ocean. Placing sensors onto a submarine cable is not a simple task; the sensors will require power that they receive from a transmission cable, from a battery, or from the submarine cable itself. Unfortunately, the existing power feed configuration for submarine cables typically only accounts for cable repeaters and their specified current requirement. Therefore, a converter will be required to properly power the sensors. Because only the current is a known value (the voltage varies depending on the position along the cable), a current source DC-DC converter must be used. This project entails the design and construction of a current source DC-DC converter intended to meet the following specifications for the sensors: step down an input current of 0.9 amps to 0.625 amps, maintain an output voltage of 24 volts, and deliver an output power of 15 watts.
The goal of this project was to create a current-source DC-DC converter that stepped down a 0.9 A input current to a 0.625 A output current at 24 V. The designed circuit successfully stepped down the input current to the proper output; however, further improvements will be needed to achieve the desired efficiency and output ripple specifications. Overall, the constructed circuit provides a proof of concept of the current source design that can be iterated upon for better results in the future.
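The stated specifications can be sanity-checked with ideal converter arithmetic (the lossless, 100%-efficiency assumption is ours, not the project's measured result):

```python
# Back-of-the-envelope check of the converter specifications,
# assuming an ideal, lossless converter (P_in = P_out).
i_in, i_out, v_out = 0.9, 0.625, 24.0    # A, A, V

p_out = v_out * i_out                    # 24 V * 0.625 A = 15.0 W, matches spec
v_in_ideal = p_out / i_in                # ~16.7 V appears across the input
ratio = i_out / i_in                     # current step-down ratio ~0.694

print(p_out, round(v_in_ideal, 2), round(ratio, 3))
```

This confirms the three specifications are mutually consistent: any real efficiency below 100% simply raises the input voltage the cable must supply above the ideal ~16.7 V.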
Application of HEC-RAS Hydrodynamic Flood Modelling for a River: A Case Study of Bertam Valley, Pahang
Dams are built to store water to adapt to changes in the catchment region, and they are undeniably one of the most efficient renewable energy sources. A dam opens its sluice gate to discharge a certain amount of water when the water level approaches its safety limit. However, an overwhelming discharge released from the dam's gated spillways can cause egregious damage to the downstream area. As a result, a study was conducted on the effects of various maximum discharges on the flow along a segment of the downstream river in Bertam Valley. The objectives of the study were to simulate flood inundation of Bertam Valley using the HEC-RAS model and to validate and compare the HEC-RAS model against observed discharge values and water levels. With HEC-RAS, a computer simulation was run to examine various maximum discharges released from the dam's gated spillway. The amounts of water release used in the simulation were 10, 27.5, 75, 230 and 300 m³/s, based on the dam releases recommended by Tenaga Nasional Berhad (TNB), according to a previous study. The results show that the safest maximum release was 27.5 m³/s, the discharge at which water started to overflow the banks. The water depth, velocity, and travel time of the water flow at the discharge of 27.5 m³/s were 2.66 m, 1.33 m/s and 0.22 hour, respectively. The study could be useful for warning local inhabitants and for helping dam management control the maximum discharge from the dam.
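The reported velocity and travel time at 27.5 m³/s imply the length of the study reach, via travel time = reach length / mean velocity (a simple consistency check; the reach length itself is not stated in the abstract):

```python
# Consistency check of the reported flow figures at 27.5 m^3/s.
velocity = 1.33          # m/s, reported mean velocity
travel_time_h = 0.22     # h, reported travel time of the flow

# reach length = velocity * travel time (converted to seconds)
reach_length_m = velocity * travel_time_h * 3600.0
print(round(reach_length_m))   # implied study reach, in metres
```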